Current Issue: October - December | Volume: 2015 | Issue Number: 4 | Articles: 4
In this paper we focus on the task of rating solutions to a programming exercise. State-of-the-art rating methods generally examine each solution against an exhaustive set of test cases, typically designed manually; hence an issue of completeness arises. We propose the application of bounded model checking to the automatic generation of test cases. The experimental evaluation we have performed reveals a substantial increase in the accuracy of ratings at the cost of a moderate increase in the computational resources needed. Most importantly, the application of model checking leads to the discovery of errors in solutions that would previously have been classified as correct...
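As a rough, hypothetical illustration of the idea this abstract describes (not the authors' actual tooling), the sketch below searches a bounded input space for a test case on which a candidate solution disagrees with a reference solution. A real bounded model checker would explore the bounded state space symbolically rather than by enumeration, and all function names here are invented for the example.

# Hedged sketch: bounded "equivalence check" between a reference solution and
# a student's solution, enumerating inputs up to a small bound. A real bounded
# model checker would encode the programs and the bound as a SAT/SMT problem
# instead of enumerating concrete inputs.
from itertools import product

def reference_solution(xs):
    # Reference behaviour for the exercise: sum of the positive elements.
    return sum(x for x in xs if x > 0)

def student_solution(xs):
    # Hypothetical buggy submission: skips the last element.
    return sum(x for x in xs[:-1] if x > 0)

def find_counterexample(bound=3, values=range(-2, 3)):
    """Return an input (a generated test case) on which the two solutions
    disagree, or None if none exists within the bound."""
    for length in range(bound + 1):
        for xs in product(values, repeat=length):
            if reference_solution(list(xs)) != student_solution(list(xs)):
                return list(xs)
    return None

if __name__ == "__main__":
    print(find_counterexample())  # e.g. [1] exposes the off-by-one bug

A counterexample found this way becomes a new test case that would mark a previously "correct" solution as faulty, which is the effect the abstract reports.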
Background: Establishing representative samples for Software Engineering surveys is still considered a challenge. The specialized literature often reports limitations in interpreting survey results, mainly due to the use of sampling frames established by convenience and non-probabilistic criteria for sampling from them. In this sense, we argue that a strategy to support the systematic establishment of sampling frames from an adequate source of sampling can contribute to improving this scenario.
Method: A conceptual framework for supporting large-scale sampling in Software Engineering surveys was organized after performing a set of experiences in designing such strategies and gathering evidence regarding their benefits. This paper depicts the use of this conceptual framework, based on a sampling strategy developed to support the replication of a survey on characteristics of agility and agile practices in software processes.
Result: A professional social network (LinkedIn) was established as the source of sampling, and its groups of interest as the units for searching for members to be recruited. This allowed us to work with a sampling frame composed of more than 110,000 members (prospective subjects) distributed over 19 groups of interest. Then, based on the similarity levels observed among these groups, eight strata were organized and 7745 members were invited, of whom 291 confirmed participation and answered the questionnaire.
Conclusion: The heterogeneity and number of participants in this replication contributed to strengthening the results of the original survey. Therefore, we believe that sharing this experience, the instruments, and the plan can be helpful for researchers and practitioners interested in executing large-scale surveys in Software Engineering...
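The figures above invite a small back-of-the-envelope check. The sketch below is illustrative only: the individual stratum sizes are invented placeholders, and only the frame size (~110,000), the number of invitations (7745), and the number of responses (291) come from the abstract. It shows the kind of proportional allocation and response-rate arithmetic involved in drawing a stratified sample from a frame of this size.

# Hedged sketch of proportional stratified allocation and response-rate
# arithmetic. Stratum sizes are hypothetical; frame size, invitations and
# responses are taken from the abstract.
frame_size = 110_000
invited = 7_745
responded = 291

# Hypothetical sizes of the eight strata (they must sum to the frame size).
strata = [30_000, 20_000, 15_000, 12_000, 11_000, 9_000, 8_000, 5_000]
assert sum(strata) == frame_size

# Proportional allocation of the invitation budget across strata.
allocation = [round(invited * s / frame_size) for s in strata]
print("Invitations per stratum:", allocation)

# Overall response rate of the replication (about 3.8%).
print(f"Response rate: {responded / invited:.1%}")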
Object-oriented technology has been widely accepted as the preferred approach for large- as well as small-scale system design. By using this technology we can develop software products with lower maintenance cost and higher quality. It is clear that the available traditional software metrics are insufficient for object-oriented software; as a result, a set of new object-oriented software metrics came into existence. Software test metrics, which measure the quality of the code, are very important elements of quality-driven testing: they help in assessing whether the software is appropriate. Our results show that the metrics suite covering almost all software quality assurance features is the CK suite. Measuring the structural quality of code is a necessity for ensuring the overall quality of the code, and the cost of maintaining and developing software is strongly affected by the quality and complexity of the software. In this paper we examine several object-oriented metrics as well as general code quality metrics proposed by various well-known researchers. These object-oriented metrics and general code quality metrics are then applied to several Java programs to analyze the structural code quality of the software product. Software reliability, readability, maintainability, and reusability can be estimated based on the values of such metrics, and the results are used to improve the overall quality of the code and help reduce maintenance cost...
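As a hypothetical sketch of what computing CK-style metrics looks like (the paper applies such metrics to Java programs with its own tooling; the code below only approximates two of them for in-memory Python classes via reflection), consider:

# Hedged sketch: approximate Weighted Methods per Class (WMC, here simply an
# unweighted method count) and Depth of Inheritance Tree (DIT). Real studies
# of Java code would use a static analyzer; this only shows the shape of the
# computation on two example classes.
import inspect

def wmc(cls):
    """Unweighted method count as a simple stand-in for WMC."""
    return len([m for m, _ in inspect.getmembers(cls, inspect.isfunction)])

def dit(cls):
    """Depth of Inheritance Tree: longest path from cls up to object."""
    if cls is object:
        return 0
    return 1 + max(dit(base) for base in cls.__bases__)

class Shape:
    def area(self): ...
    def perimeter(self): ...

class Circle(Shape):
    def area(self): ...
    def radius(self): ...

for c in (Shape, Circle):
    print(c.__name__, "WMC:", wmc(c), "DIT:", dit(c))

Higher WMC and deeper inheritance trees are commonly read as indicators of greater complexity and maintenance effort, which is how such values feed the quality assessments the abstract mentions.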
Background: Writing patches to fix bugs or implement new features is an important software development task, as it contributes to raising the quality of a software system. Not all patches are accepted on the first attempt, though. Patches can be rejected because of problems found during code review, automated testing, or manual testing. A high rejection rate, especially late in the lifecycle, may indicate problems with the software development process. Our objective is to better understand the relationship among different forms of patch rejection and to characterize their frequency within a project. This paper describes one step towards this objective by presenting an analysis of a large open source project, Firefox.
Method: In order to characterize patch rejection, we relied on issues and source code commits from over four years of the project's history. We computed monthly metrics on the occurrence of three indicators of patch rejection (negative code reviews, commit backouts, and bug reopening) and measured the time it takes both to submit a patch and to reject inappropriate patches.
Results: In Firefox, 20% of the issues contain rejected patches. Negative reviews, backouts, and issue reopenings are relatively independent events; in particular, about 70% of issue reopenings are premature, and 75% of all inappropriate changes are rejected within four days.
Conclusions: Patch rejection is a frequent event, occurring multiple times a day. Given the relative independence of rejection types, existing studies that focus on a single rejection type fail to detect many rejections. Although inappropriate changes cause rework, they have little effect on the quality of released versions of Firefox...
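As a hypothetical sketch of the kind of monthly metric described above (the records and field names are invented for illustration; the authors mined Firefox's actual issue tracker and version history), one could compute the share of issues with rejected patches per month and the time taken to reject them like this:

# Hedged sketch: per-month share of issues containing a rejected patch and
# the number of days it took to reject them. Records are illustrative only.
from datetime import datetime
from collections import defaultdict

issues = [
    {"id": 1, "month": "2013-01", "submitted": datetime(2013, 1, 3),
     "rejected": True,  "rejected_at": datetime(2013, 1, 5)},
    {"id": 2, "month": "2013-01", "submitted": datetime(2013, 1, 10),
     "rejected": False, "rejected_at": None},
    {"id": 3, "month": "2013-02", "submitted": datetime(2013, 2, 1),
     "rejected": True,  "rejected_at": datetime(2013, 2, 4)},
]

by_month = defaultdict(list)
for issue in issues:
    by_month[issue["month"]].append(issue)

for month, month_issues in sorted(by_month.items()):
    rejected = [i for i in month_issues if i["rejected"]]
    rate = len(rejected) / len(month_issues)
    days = [(i["rejected_at"] - i["submitted"]).days for i in rejected]
    print(month, f"rejection rate {rate:.0%}", "days to reject:", days)

The same aggregation, run separately for negative reviews, backouts, and reopenings, is what makes it possible to observe how independent the three rejection indicators are.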